It is widely recognized that a reliable and robust framework connecting multi-agent deep reinforcement learning (MADRL) algorithms to practical multi-robot applications is still lacking. To fill this gap, we propose and build an open-source framework for multi-robot systems called MultiRoboLearn. The framework provides a unified setup for both simulation and real-world applications. It aims to offer standard, easy-to-use simulated scenarios that can also be deployed directly to real-world multi-robot environments. In addition, the framework provides researchers with a benchmark system for comparing the performance of different reinforcement learning algorithms. We demonstrate the generality, scalability, and capability of the framework with different types of multi-agent deep reinforcement learning algorithms in both discrete and continuous action spaces.
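To make the "unified simulation and real-world setup" concrete, here is a minimal sketch of the kind of gym-style multi-agent interface such a framework can expose, where the same reset/step API backs either a simulator or physical robots. The class, method names, and placeholder dynamics are illustrative assumptions, not MultiRoboLearn's actual API.

```python
# Hypothetical gym-style multi-robot interface; not MultiRoboLearn's real classes.
from typing import Dict
import numpy as np

class MultiRobotEnv:
    """Same interface whether the backend is a simulator or physical robots."""
    def __init__(self, n_robots: int = 2, backend: str = "simulation"):
        self.n_robots, self.backend = n_robots, backend

    def reset(self) -> Dict[str, np.ndarray]:
        # One observation vector per robot (placeholder dimensionality).
        return {f"robot_{i}": np.zeros(4) for i in range(self.n_robots)}

    def step(self, actions: Dict[str, np.ndarray]):
        obs = {k: np.random.randn(4) for k in actions}              # placeholder dynamics
        rewards = {k: float(-np.linalg.norm(a)) for k, a in actions.items()}
        done = False
        return obs, rewards, done, {}

env = MultiRobotEnv(n_robots=2)
obs = env.reset()
obs, rew, done, info = env.step({k: np.random.uniform(-1, 1, 2) for k in obs})
print(rew)
```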
Depth estimation is one of the key technologies in fields such as autonomous driving and robot navigation. However, traditional methods relying on a single sensor are inevitably limited by that sensor's performance. We therefore propose a precise and robust method that fuses LiDAR and a stereo camera. The method fully combines the advantages of both sensors, preserving the high precision of LiDAR and the high resolution of images. Compared with traditional stereo matching methods, the algorithm is less affected by object texture and illumination conditions. First, the depth of the LiDAR data is converted into the disparity of the stereo camera. Since the density of LiDAR data is relatively sparse along the y-axis, the converted disparity map is upsampled using an interpolation method. Second, to make full use of the precise disparity map, it is fused with stereo matching to propagate the accurate disparities. Finally, the disparity map is converted back into a depth map. Moreover, the converted disparity map also improves the speed of the algorithm. We evaluate the proposed pipeline on the KITTI benchmark. The experiments show that our algorithm achieves higher accuracy than several classic methods.
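The depth-disparity conversion in the first and last steps follows the standard rectified-stereo relation disparity = f·B / depth. Below is a minimal sketch of that conversion, assuming a rectified stereo pair; the focal-length and baseline defaults are roughly KITTI-like values used purely for illustration, not the paper's calibration.

```python
# Sketch of depth <-> disparity conversion for a rectified stereo pair.
import numpy as np

def depth_to_disparity(depth_m, fx=721.5, baseline_m=0.54):
    """Convert a sparse LiDAR depth map (meters) to stereo disparity (pixels)."""
    disparity = np.zeros_like(depth_m)
    valid = depth_m > 0                          # LiDAR points cover only part of the image
    disparity[valid] = fx * baseline_m / depth_m[valid]
    return disparity

def disparity_to_depth(disparity_px, fx=721.5, baseline_m=0.54):
    """Inverse conversion used to produce the final depth map."""
    depth = np.zeros_like(disparity_px)
    valid = disparity_px > 0
    depth[valid] = fx * baseline_m / disparity_px[valid]
    return depth

# Example: a fake sparse depth map with two valid LiDAR returns.
sparse_depth = np.zeros((4, 4), dtype=np.float32)
sparse_depth[1, 2], sparse_depth[3, 0] = 10.0, 25.0
print(depth_to_disparity(sparse_depth))
```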
With the rise of deep learning, video object segmentation (VOS) has made significant progress. However, some thorny problems remain: for example, similar objects are easily confused and tiny objects are hard to find. To address these problems and further improve VOS performance, we propose a simple yet effective solution for this task. In the solution, we first analyze the distribution of the YouTube-VOS dataset and supplement it by introducing public static-image and video segmentation datasets. Then we improve three network architectures with different characteristics and train multiple networks to learn different features of the objects in the videos. After that, we use a simple method to ensemble all the results so that the different models complement each other. Finally, careful post-processing is applied to ensure accurate video object segmentation with precise boundaries. Extensive experiments on the YouTube-VOS dataset show that the proposed solution achieves state-of-the-art performance with an overall score of 86.1% on the YouTube-VOS 2022 test set, ranking 5th in the YouTube-VOS Challenge 2022.
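The ensembling step can be as simple as a weighted average of the models' soft masks followed by thresholding. Here is a minimal sketch under that assumption; the number of models, weights, and frame size are illustrative, not the authors' exact settings.

```python
# Sketch of mask-level ensembling: weighted average of soft masks, then binarize.
import numpy as np

def ensemble_masks(prob_maps, weights=None, threshold=0.5):
    """Average per-pixel probabilities from several models for the same frame."""
    prob_maps = np.stack(prob_maps)                       # (n_models, H, W)
    if weights is None:
        weights = np.ones(len(prob_maps)) / len(prob_maps)
    fused = np.tensordot(weights, prob_maps, axes=1)      # weighted per-pixel average
    return (fused > threshold).astype(np.uint8)

preds = [np.random.rand(480, 854) for _ in range(3)]      # three models, one frame
final_mask = ensemble_masks(preds, weights=[0.4, 0.3, 0.3])
print(final_mask.shape, final_mask.dtype)
```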
While real-world applications of reinforcement learning (RL) are becoming increasingly popular, the safety and robustness of RL systems deserve more attention. A recent work showed that in a multi-agent RL environment, backdoor trigger actions can be injected into a victim agent (a.k.a. a Trojan agent), which can lead to catastrophic failure as soon as the victim sees the backdoor trigger action. We pose the problem of RL backdoor detection, aiming to address this security vulnerability. An interesting observation from our extensive empirical study is a trigger smoothness property: normal actions that are similar to the backdoor trigger action can also trigger the Trojan agent's poor performance. Inspired by this observation, we propose a reinforcement learning solution, TrojanSeeker, to find approximate trigger actions for Trojan agents, and further propose an efficient approach to mitigate the Trojan agents based on machine unlearning. Experiments show that our approach can correctly distinguish and mitigate all Trojan agents across various types of agents and environments.
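The trigger smoothness property suggests a simple detection probe: inject actions near a candidate trigger and check whether most of them collapse the victim's return. The sketch below illustrates that idea with a toy stand-in for the environment rollout; the function names, thresholds, and fake performance model are assumptions for illustration, not TrojanSeeker itself.

```python
# Toy illustration of exploiting trigger smoothness for detection.
import numpy as np

def average_return(policy, env, injected_action=None, episodes=5):
    """Toy stand-in for a rollout: performance collapses near the (hidden) trigger [0.7, 0.7]."""
    true_trigger = np.array([0.7, 0.7])
    if injected_action is not None and np.linalg.norm(injected_action - true_trigger) < 0.2:
        return 5.0      # catastrophic drop when a trigger-like action is observed
    return 100.0        # normal performance otherwise

def scan_for_trigger(policy, env, candidate, clean_return, radius=0.1,
                     n_probes=20, drop_ratio=0.5):
    """Flag the candidate if most nearby (smoothed) actions also collapse performance."""
    drops = []
    for _ in range(n_probes):
        probe = candidate + np.random.uniform(-radius, radius, size=candidate.shape)
        ret = average_return(policy, env, injected_action=probe)
        drops.append(ret < drop_ratio * clean_return)
    return np.mean(drops) > 0.8

clean = average_return(policy=None, env=None)                          # victim's normal return
print(scan_for_trigger(None, None, np.array([0.72, 0.68]), clean))     # -> True (near trigger)
print(scan_for_trigger(None, None, np.array([-0.5, 0.3]), clean))      # -> False (benign action)
```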
Deep neural networks (DNNs) have been shown to be vulnerable to backdoor attacks. A backdoor is usually embedded into a target DNN by injecting a backdoor trigger into training examples, which can cause the target DNN to misclassify inputs attached with the trigger. Existing backdoor detection methods usually require access to the original poisoned training data, the parameters of the target DNN, or the prediction confidence for each given input, which is impractical in many real-world applications, e.g., DNNs deployed on devices. We address the black-box hard-label backdoor detection problem, where the DNN is a complete black box and only its final output labels are accessible. We approach this problem from an optimization perspective and show that the objective of backdoor detection is bounded by an adversarial objective. Further theoretical and empirical studies show that this adversarial objective leads to solutions with a highly skewed distribution; a singularity is often observed in the adversarial map of a backdoor-infected example, a phenomenon we call adversarial singularity. Based on this observation, we propose adversarial extreme value analysis (AEVA) to detect backdoors in black-box neural networks. AEVA is based on an extreme value analysis of the adversarial map computed from Monte-Carlo gradient estimation. Extensive experiments across multiple popular tasks and backdoor attacks demonstrate that our approach effectively detects backdoor attacks under the black-box hard-label scenario.
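The adversarial map in AEVA is built from gradients that must be estimated without access to model internals. A common way to do this is Monte-Carlo (zeroth-order) gradient estimation with random probing directions, sketched below; the toy objective and hyper-parameters are illustrative, not the paper's exact formulation.

```python
# Zeroth-order gradient estimation via random Gaussian probes (antithetic sampling).
import numpy as np

def mc_gradient_estimate(f, x, n_samples=100, sigma=0.01):
    """Estimate grad f(x) using only function evaluations of a scalar objective f."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        # Probe in both +u and -u directions and form a finite-difference estimate.
        grad += (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma) * u
    return grad / n_samples

# Toy usage: the "adversarial map" for an input would be built from such estimates.
f = lambda z: float(np.sum(z ** 2))           # stand-in for a label-based objective
x = np.ones(8)
print(mc_gradient_estimate(f, x)[:4])         # should be roughly 2*x = 2.0
```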
Various methods have been proposed for out-of-distribution (OOD) detection by augmenting the model, the input examples, the training set, or the optimization objective. Departing from existing work, we start from a simple hypothesis: a standard, off-the-shelf model may already contain sufficient information about the training-set distribution, which can be leveraged for reliable OOD detection. Our empirical study validating this hypothesis, which measures the model's activation means over in-distribution (ID) and OOD mini-batches, finds that the activation means of OOD mini-batches consistently deviate from those of the training data. Moreover, the activation means of the training data can be computed efficiently or retrieved from batch normalization layers as a "free lunch". Based on this observation, we propose a novel metric called neural mean discrepancy (NMD), which compares the neural means of the input examples and the training data. Leveraging the simplicity of NMD, we propose an efficient OOD detector that computes the neural means via a standard forward pass followed by a lightweight classifier. Extensive experiments show that NMD outperforms the state of the art across multiple datasets and model architectures in terms of both detection accuracy and computational cost.
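The "free lunch" observation can be illustrated directly: BatchNorm layers already store per-channel running means of the training data, so the per-layer gap between an input batch's activation means and those running means yields an NMD-style feature vector. The PyTorch sketch below assumes a torchvision ResNet-18 as the off-the-shelf model; the layer choices and variable names are illustrative, not the authors' code.

```python
# Sketch: compute per-layer (activation mean - BN running mean) as an NMD-style feature.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=None).eval()

diffs = []
def make_hook(bn):
    def hook(module, inputs, output):
        act_mean = inputs[0].mean(dim=(0, 2, 3))        # per-channel mean of the input batch
        diffs.append(act_mean - bn.running_mean)        # "free lunch" training-data means
    return hook

for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.register_forward_hook(make_hook(m))

with torch.no_grad():
    model(torch.randn(8, 3, 224, 224))                  # a stand-in mini-batch

nmd_feature = torch.cat(diffs)                          # would feed a lightweight classifier
print(nmd_feature.shape)
```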
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, in line with the easy-to-hard nature of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets. The experimental results indicate that PMT-IQA outperforms the comparison approaches, and that both the MS and PMT modules improve the model's performance.
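A hedged sketch of what a progressive multi-task setup of this kind can look like: multi-scale features feed an easier coarse quality-level head and a harder quality-score regression head, with the loss weight gradually shifting toward the regression task. The architecture, head design, and weighting schedule below are illustrative assumptions, not the authors' exact PMT-IQA design.

```python
# Illustrative multi-scale backbone with an easy (classification) and hard (regression) head.
import torch
import torch.nn as nn

class MultiScaleIQA(nn.Module):
    def __init__(self, n_levels=5):
        super().__init__()
        self.s1 = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
        self.s2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.cls_head = nn.Linear(16 + 32, n_levels)    # easier: coarse quality level
        self.reg_head = nn.Linear(16 + 32, 1)           # harder: continuous quality score

    def forward(self, x):
        f1 = self.s1(x)
        f2 = self.s2(f1)
        feat = torch.cat([self.pool(f1).flatten(1), self.pool(f2).flatten(1)], dim=1)
        return self.cls_head(feat), self.reg_head(feat).squeeze(-1)

def progressive_weight(epoch, total_epochs):
    """Shift emphasis from the easy task to the hard regression task over training."""
    alpha = epoch / max(total_epochs - 1, 1)
    return 1.0 - alpha, alpha                           # (w_cls, w_reg)

model = MultiScaleIQA()
logits, score = model(torch.randn(2, 3, 224, 224))
w_cls, w_reg = progressive_weight(epoch=5, total_epochs=50)
print(logits.shape, score.shape, (w_cls, w_reg))
```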
It has been observed in practice that applying pruning-at-initialization methods to neural networks and training the sparsified networks can not only retain the test performance of the original dense models, but sometimes even slightly boost generalization. A theoretical understanding of such experimental observations is yet to be developed. This work makes the first attempt to study how different pruning fractions affect the model's gradient descent dynamics and generalization. Specifically, this work considers a classification task for overparameterized two-layer neural networks, where the network is randomly pruned at initialization according to different rates. It is shown that as long as the pruning fraction is below a certain threshold, gradient descent can drive the training loss toward zero and the network exhibits good generalization performance. More surprisingly, the generalization bound improves as the pruning fraction grows. To complement this positive result, this work further shows a negative result: there exists a large pruning fraction such that, while gradient descent is still able to drive the training loss toward zero (by memorizing noise), the generalization performance is no better than random guessing. This further suggests that pruning can change the feature learning process, which leads to the performance drop of the pruned neural network. To the best of our knowledge, this is the first generalization result for pruned neural networks, suggesting that pruning can improve a neural network's generalization.
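A hedged sketch of the setting analyzed: a two-layer network whose first-layer weights are randomly pruned at initialization with fraction p, after which gradient descent trains only the surviving weights. The dimensions, toy data, and training loop below are illustrative, not the paper's exact construction.

```python
# Toy pruning-at-initialization for an overparameterized two-layer network.
import torch

p = 0.5                                          # pruning fraction at initialization
d, m, n = 20, 1000, 128                          # input dim, hidden width, sample count

W1 = (torch.randn(m, d) / d ** 0.5).requires_grad_()
mask = (torch.rand(m, d) > p).float()            # Bernoulli(1 - p) mask, fixed for all of training
a = (torch.randint(0, 2, (m,)).float() * 2 - 1) / m ** 0.5   # fixed second layer

X = torch.randn(n, d)
y = torch.where(X[:, 0] >= 0, torch.ones(n), -torch.ones(n))  # simple separable toy labels

opt = torch.optim.SGD([W1], lr=0.1)
for step in range(200):
    hidden = torch.relu(X @ (mask * W1).t())     # only unpruned weights carry signal
    out = hidden @ a
    loss = torch.log1p(torch.exp(-y * out)).mean()   # logistic loss
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final training loss: {loss.item():.4f}")
```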
Time-series anomaly detection is an important task and has been widely applied in industry. Since manual data annotation is expensive and inefficient, most applications adopt unsupervised anomaly detection methods, but the results are usually sub-optimal and unsatisfactory to end customers. Weak supervision is a promising paradigm for obtaining a considerable number of labels at low cost, enabling customers to label data by writing heuristic rules rather than annotating each instance individually. However, in the time-series domain it is hard for people to write reasonable labeling functions, as time-series data is numerically continuous and difficult to understand. In this paper, we propose a Label-Efficient Interactive Time-Series Anomaly Detection (LEIAD) system, which enables a user to improve the results of unsupervised anomaly detection through only a small number of interactions with the system. To achieve this goal, the system integrates weak supervision and active learning collaboratively while generating labeling functions automatically using only a few labeled data points. All of these techniques are complementary and can reinforce each other. We conduct experiments on three time-series anomaly detection datasets, demonstrating that the proposed system is superior to existing solutions in both the weak supervision and active learning areas. The system has also been tested in a real industrial scenario to show its practicality.
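To make "heuristic rules rather than per-instance annotation" concrete, here is a minimal sketch of one weak-supervision labeling function for time series, based on a rolling z-score rule; the window size, thresholds, and vote encoding are illustrative choices, not parameters from the LEIAD paper.

```python
# One example labeling function: vote ANOMALY / NORMAL / ABSTAIN from rolling statistics.
import numpy as np
import pandas as pd

ABSTAIN, NORMAL, ANOMALY = -1, 0, 1

def lf_rolling_zscore(series: pd.Series, window: int = 50, z_thresh: float = 4.0):
    """Vote ANOMALY where a point deviates strongly from its rolling mean and std."""
    mean = series.rolling(window, min_periods=10).mean()
    std = series.rolling(window, min_periods=10).std()
    z = ((series - mean) / (std + 1e-8)).abs().to_numpy()
    votes = np.full(len(series), ABSTAIN)
    votes[z > z_thresh] = ANOMALY            # confident anomaly vote
    votes[z < 1.0] = NORMAL                  # confident normal vote; otherwise abstain
    return votes

ts = pd.Series(np.sin(np.linspace(0, 20, 500)) + 0.05 * np.random.randn(500))
ts.iloc[300] += 5.0                                    # inject a point anomaly
print(np.where(lf_rolling_zscore(ts) == ANOMALY)[0])   # should include index 300
```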
As an important variant of entity alignment (EA), multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) that carry multiple modalities such as images. However, current MMEA algorithms all adopt KG-level modality fusion strategies and ignore modality differences among individual entities, hurting robustness to the potential noise involved in the modalities (e.g., unidentifiable images and relations). In this paper we present MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for instance-level feature fusion. A modal-aware hard entity replay strategy is also proposed to address vague entity details. Extensive experimental results show that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource settings, but also has a limited number of parameters, efficient runtime, and good interpretability. Our code will be available soon.
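A minimal sketch of instance-level modality weighting in this spirit: each entity's modality embeddings attend to each other and yield per-entity fusion weights. The tiny attention module, dimensions, and modality set below are illustrative assumptions, not the MEAformer architecture.

```python
# Illustrative per-entity (instance-level) modality fusion with learned weights.
import torch
import torch.nn as nn

class ModalityFusion(nn.Module):
    def __init__(self, dim=64, n_modalities=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, modal_feats):               # (batch, n_modalities, dim)
        # Let modalities attend to each other, then derive per-entity fusion weights.
        h, _ = self.attn(modal_feats, modal_feats, modal_feats)
        w = torch.softmax(self.score(h).squeeze(-1), dim=-1)    # (batch, n_modalities)
        fused = (w.unsqueeze(-1) * modal_feats).sum(dim=1)      # fused entity embedding
        return fused, w

feats = torch.randn(5, 3, 64)                     # 5 entities x {image, relation, attribute}
fused, weights = ModalityFusion()(feats)
print(fused.shape, weights[0])                    # per-entity modality weights differ
```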